gradient-based meta-learning
SHOT: Suppressing the Hessian along the Optimization Trajectory for Gradient-Based Meta-Learning
Lee, JunHoo, Yoo, Jayeon, Kwak, Nojun
In this paper, we hypothesize that gradient-based meta-learning (GBML) implicitly suppresses the Hessian along the optimization trajectory in the inner loop. Based on this hypothesis, we introduce an algorithm called SHOT (Suppressing the Hessian along the Optimization Trajectory) that minimizes the distance between the parameters of the target and reference models to suppress the Hessian in the inner loop. Despite dealing with high-order terms, SHOT does not significantly increase the computational complexity of the baseline model. It is agnostic to both the algorithm and architecture used in GBML, making it highly versatile and applicable to any GBML baseline. To validate the effectiveness of SHOT, we conduct empirical tests on standard few-shot learning tasks and qualitatively analyze its dynamics. We confirm our hypothesis empirically and demonstrate that SHOT outperforms the corresponding baseline. Code is available at: https://github.com/JunHoo-Lee/SHOT
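As a concrete illustration of the mechanism the abstract describes, the sketch below adds a parameter-distance penalty between a few-step (target) inner loop and a longer-rollout (reference) inner loop on a toy least-squares task, and backpropagates everything through the shared initialization. The choice of reference model, the penalty weight, and the toy loss are assumptions made here for readability; the authors' actual procedure is in the linked repository.

```python
# Hedged sketch of a SHOT-style parameter-distance penalty on a MAML-like inner
# loop. Not the authors' exact algorithm; see https://github.com/JunHoo-Lee/SHOT.
import torch


def inner_adapt(params, data, steps, lr):
    """Differentiable gradient-descent inner loop on a toy least-squares loss."""
    x, y = data
    for _ in range(steps):
        loss = ((x @ params - y) ** 2).mean()
        (grad,) = torch.autograd.grad(loss, params, create_graph=True)
        params = params - lr * grad
    return params


torch.manual_seed(0)
meta_params = torch.randn(5, requires_grad=True)   # shared initialization
x, y = torch.randn(16, 5), torch.randn(16)         # one synthetic task

# Target model: the usual few-step adaptation. Reference model: a longer rollout
# (an assumption made here; the paper defines its own reference model).
target = inner_adapt(meta_params, (x, y), steps=1, lr=0.1)
reference = inner_adapt(meta_params, (x, y), steps=5, lr=0.1)

outer_loss = ((x @ target - y) ** 2).mean()
shot_penalty = ((target - reference) ** 2).sum()   # pull target toward reference
total = outer_loss + 0.1 * shot_penalty            # 0.1 is an illustrative weight
(meta_grad,) = torch.autograd.grad(total, meta_params)
print(meta_grad.norm())
```

In practice one would replace the toy loss with the few-shot task loss and the longer rollout with whatever reference model the paper prescribes; the point here is only that the penalty is computed on inner-loop parameters and differentiated back to the initialization.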
On the Subspace Structure of Gradient-Based Meta-Learning
Tegnér, Gustaf, Reichlin, Alfredo, Yin, Hang, Björkman, Mårten, Kragic, Danica
In this work we provide an analysis of the distribution of the post-adaptation parameters of Gradient-Based Meta-Learning (GBML) methods. Previous work has noticed how, for the case of image classification, this adaptation only takes place on the last layers of the network. We propose the more general notion that parameters are updated over a low-dimensional subspace of the same dimensionality as the task-space and show that this holds for regression as well. Furthermore, the induced subspace structure provides a method to estimate the intrinsic dimension of the space of tasks.
[Figure 1: The space of task-adapted parameters for a sine regression task embedded in two-dimensional space. The polar-coordinate structure of the task is preserved in the space of task-adapted parameters.]
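The claim in this abstract lends itself to a small numerical check: if tasks vary only inside a low-dimensional task space, the post-adaptation parameter deviations should concentrate in a subspace of matching dimensionality. The sketch below builds synthetic least-squares tasks on a 2-D task space, adapts from a fixed initialization, and inspects the singular-value spectrum of the adaptation directions. The task construction, inner loop, and PCA-style diagnostic are illustrative assumptions, not the authors' experimental setup.

```python
# Hedged sketch: measure how low-dimensional the post-adaptation parameter
# deviations are when tasks live in a 2-D subspace of a 10-D weight space.
import torch


def adapt(params, x, y, steps=10, lr=0.1):
    """First-order inner loop: plain gradient descent on a least-squares task."""
    for _ in range(steps):
        loss = ((x @ params - y) ** 2).mean()
        (grad,) = torch.autograd.grad(loss, params)
        params = (params - lr * grad).detach().requires_grad_(True)
    return params.detach()


torch.manual_seed(0)
dim, task_dim = 10, 2
basis = torch.randn(task_dim, dim)            # tasks vary only inside a 2-D subspace
meta_init = torch.randn(dim, requires_grad=True)

deltas = []
for _ in range(200):                          # sample synthetic regression tasks
    w_true = torch.randn(task_dim) @ basis
    x = torch.randn(32, dim)
    y = x @ w_true
    deltas.append(adapt(meta_init, x, y) - meta_init.detach())

D = torch.stack(deltas)                       # (tasks, params) adaptation directions
_, s, _ = torch.linalg.svd(D - D.mean(0), full_matrices=False)
explained = (s ** 2) / (s ** 2).sum()
print(explained[:4])                          # the leading ~task_dim directions dominate
```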
Provable Guarantees for Gradient-Based Meta-Learning
Khodak, Mikhail, Balcan, Maria-Florina, Talwalkar, Ameet
We study the problem of meta-learning through the lens of online convex optimization, developing a meta-algorithm bridging the gap between popular gradient-based meta-learning and classical regularization-based multi-task transfer methods. Our method is the first to simultaneously satisfy good sample efficiency guarantees in the convex setting, with generalization bounds that improve with task-similarity, while also being computationally scalable to modern deep learning architectures and the many-task setting. Despite its simplicity, the algorithm matches, up to a constant factor, a lower bound on the performance of any such parameter-transfer method under natural task similarity assumptions. We use experiments in both convex and deep learning settings to verify and demonstrate the applicability of our theory.
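To make the parameter-transfer view concrete, the sketch below runs within-task gradient descent from a shared initialization on a stream of similar convex (least-squares) tasks and updates the initialization toward the running average of the per-task solutions. The task distribution, step sizes, and averaging rule are assumptions chosen for illustration; they stand in for, and do not reproduce, the paper's meta-algorithm or its guarantees.

```python
# Hedged sketch of the initialization-as-regularization-center idea: solve each
# task from a shared start point, then average the solutions to move that point.
import torch

torch.manual_seed(0)
dim = 5
phi = torch.zeros(dim)                        # shared initialization / regularization center
solutions = []

for t in range(100):                          # a stream of similar convex tasks
    w_true = 1.0 + 0.3 * torch.randn(dim)     # tasks clustered around a common center
    x = torch.randn(20, dim)
    y = x @ w_true

    w = phi.clone()
    for _ in range(25):                       # within-task gradient descent from phi
        grad = 2.0 / x.shape[0] * x.T @ (x @ w - y)
        w = w - 0.1 * grad

    solutions.append(w)
    phi = torch.stack(solutions).mean(0)      # meta-update: running average of task solutions

print(phi)                                    # drifts toward the common task center (~1)
```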